scenario-notebooks/Export Historical Log Data.ipynb

{ "cells": [ { "cell_type": "markdown", "source": [ "# Historical Data Export\r\n", "\r\n", "__Notebook Version:__ 1.0<br>\r\n", "__Python Version:__ Python 3.8<br>\r\n", "__Required Packages:__ azure-monitor-query, azure-storage-file-datalake, azureml-synapse<br>\r\n", "__Platforms Supported:__ Azure Machine Learning Notebooks connected to Azure Synapse Workspace\r\n", " \r\n", "__Data Source Required:__ Yes\r\n", "\r\n", "__Data Source:__ Azure Log Analytics\r\n", "\r\n", "__Spark Version:__ 3.1 or above\r\n", " \r\n", "## Description\r\n", "\r\n", "Use this notebook to export of historical data in your Log Analytics workspace. \r\n", "(This notebook extends the **continuous log export tool** in Microsoft Sentinel (see [docs](https://docs.microsoft.com/azure/azure-monitor/logs/logs-data-export?tabs=portal) \r\n", "and [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/configure-a-continuous-data-pipeline-in-microsoft-sentinel-for/ba-p/3242605)), \r\n", "with historical logs exported using the same data format and partition scheme as the continuously exported logs.) \r\n", "\r\n", "In this notebook, we demo the **one-time export of historical log data** from the _SigninLogs_ table, but this can easily be modified to target **any subset of log data**. The notebook only needs to be **run once**, following which exported logs can be used in Sentinel notebooks for large scale data analytics and machine learning on log data (see the Sentinel template notebooks for more details). \r\n", "Alternatively, you may wish to use this tool for archiving older logs in more cost-effective archive-tier Azure storage.\r\n", "\r\n", "This notebook also makes use of the **Azure Synapse integration** for Sentinel notebooks to enable partitioning and writing of data files at scale. To set up the Synapse integration for Sentinel noteboks, please follow the instructions [here](https://docs.microsoft.com/azure/sentinel/notebooks-with-synapse).\r\n", "\r\n", "## Steps\r\n", "\r\n", "1. Fetch data from the Sentinel workspace using the `azure-monitor-query` Python package. \r\n", " - The requested data may be the entirety of a given table, or based on a custom KQL query\r\n", " - Querying and fetching of data is automatically chunked and run asynchronously to avoid API throttling issues\r\n", "\r\n", "2. Write data to Azure Data Lake Gen2\r\n", "\r\n", "3. Use Apache Spark (via Azure Synapse) to repartition data to match the partition scheme created by the continuous log export tool\r\n", " - The continuous log export tool created a separate nested directory for each (year, month, day, hour, 5-minute interval) tuple in the format: **`{base_path}/y=<year>/m=<month>/d=<day>/h=<hour>/m=<5-minute-interval>`**\r\n", " - For a year's worth of historical log data, we may be writing over 100,000 separate data files, so we rely on Spark's multi-executor parallelism to do this efficiently\r\n", "\r\n", "4. Use the `azure-storage-file-datalake` Python package to do rename a few high-level directories to match the partition scheme used by the continuous log export tool\r\n", "\r\n", "5. 
Clean up any data stored in intermediate locations during the data ETL process" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "# Install Pre-Requisite Packages" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "import sys\r\n", "\r\n", "# Install Azure Monitor Query client package to query Sentinel log data\r\n", "!{sys.executable} -m pip install --upgrade azure-monitor-query" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": true }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1651776730966 } } }, { "cell_type": "code", "source": [ "# Install Azure storage datalake library to manipulate file systems\r\n", "!{sys.executable} -m pip install --upgrade azure-storage-file-datalake --pre" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "# Install AzureML Synapse package to use spark magic\r\n", "!{sys.executable} -m pip install --upgrade azureml-synapse" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "# 0. Setup" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "Authenticate and connect to your Log Analytics workspace. The workspace ID is loaded from the `config.json` file which will have been created when you set up your first Sentinel notebook." ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "import asyncio\r\n", "import io\r\n", "import itertools\r\n", "import json\r\n", "from math import ceil\r\n", "import os\r\n", "from pathlib import PurePosixPath\r\n", "\r\n", "\r\n", "import pandas as pd\r\n", "from datetime import datetime, timedelta, timezone\r\n", "\r\n", "\r\n", "from azure.core.exceptions import HttpResponseError\r\n", "from azure.monitor.query import LogsQueryStatus, LogsBatchQuery\r\n", "from azure.monitor.query.aio import LogsQueryClient\r\n", "from azure.identity.aio import DefaultAzureCredential\r\n", "from azure.storage.filedatalake import DataLakeServiceClient\r\n", "from azureml.core import Workspace, LinkedService\r\n", "\r\n", "\r\n", "# Credential used to authenticate to Log Analytics API\r\n", "credential = DefaultAzureCredential()\r\n", "logs_client = LogsQueryClient(credential)\r\n", "\r\n", "with open(\"config.json\", \"r\") as f:\r\n", " LOG_ANALYTICS_WORKSPACE_ID = json.load(f).get(\"workspace_id\", \"\")\r\n", "\r\n", "# Use this if a `config.json` has not been created automatically:\r\n", "# LOG_ANALYTICS_WORKSPACE_ID = '<your Log Analytics / Sentinel workspace ID>'" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652469238932 } } }, { "cell_type": "markdown", "source": [ "We now define functions that we will use to query and fetch the historical log data to export/archive. We use the `azure-monitor-query` Python package to invoke the Log Analytics REST API. 
\r\n", "Queries are chunked by time range to avoid [throttling and truncation of returned data](https://docs.microsoft.com/azure/azure-monitor/service-limits#log-analytics-workspaces) and the chunked API calls are executed asynchronously.\r\n", "\r\n", "**Edit the KQL query in the cell below as appropriate to specify which logs/columns to query; to specify all the available logs for a given table, just set `QUERY = <name-of-table>`.**" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "# EDIT THIS KQL AS REQUIRED!!!\r\n", "QUERY = \"SigninLogs\" # Add any `project` statement and/or filtering here!!!" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652469090747 } } }, { "cell_type": "code", "source": [ "async def async_execute_query(\r\n", " query: str, \r\n", " start_time: datetime, \r\n", " end_time: datetime, \r\n", " *, \r\n", " retries: int = 1, \r\n", " query_num: int = None, \r\n", " print_message: str = None\r\n", "):\r\n", " \"\"\"\r\n", " Asyncrhonously execute the given query, restricted to the given time range, and parse the API response\r\n", " \"\"\"\r\n", " if query_num is None:\r\n", " query_num = \"\"\r\n", "\r\n", " for i in range(retries + 1):\r\n", " if print_message is not None:\r\n", " print(print_message)\r\n", " try:\r\n", " response = await logs_client.query_workspace(\r\n", " workspace_id=LOG_ANALYTICS_WORKSPACE_ID,\r\n", " query=query,\r\n", " timespan=(start_time, end_time),\r\n", " server_timeout=600, \r\n", " # include_statistics=True, # Use for debugging!\r\n", " )\r\n", " except HttpResponseError as e:\r\n", " print(f\"Fatal error when attempting query {query_num} (query time span: {start_time} to {end_time}):\\n\\t\", e)\r\n", " print_message = f\"Attempt {i + 2} of {retries + 1}. {print_message}\"\r\n", " continue\r\n", "\r\n", " if response.status == LogsQueryStatus.SUCCESS:\r\n", " print(f\"Query {query_num} successful (query time span: {start_time} to {end_time})\")\r\n", " return response.tables[0]\r\n", " elif response.status == LogsQueryStatus.PARTIAL:\r\n", " # this will be a LogsQueryPartialResult\r\n", " error = response.partial_error\r\n", " print(f\"Partial results returned for query {query_num} (query time span: {start_time} to {end_time}):\\n\\t\", error.message)\r\n", " if i == retries + 1:\r\n", " return response.partial_data[0]\r\n", " elif response.status == LogsQueryStatus.FAILURE:\r\n", " # this will be a LogsQueryError\r\n", " print(f\"Query {query_num} failed (query time span: {start_time} to {end_time}):\\n\\t\", response.message)\r\n", " else:\r\n", " print(f\"Unknown error in query {query_num} (query time span: {start_time} to {end_time})\")\r\n", "\r\n", " print_message = f\"Attempt {i + 2} of {retries + 1}. 
{print_message}\"\r\n", "\r\n", "\r\n", "async def batch_endpoints_by_row_count(\r\n", " end_time: datetime, \r\n", " days_back: int, \r\n", " max_rows_per_query: int = int(1e5), # Maximum is supposedly 500,000, but that doesn't seem correct\r\n", " time_col: str = \"TimeGenerated\",\r\n", "):\r\n", " \"\"\"\r\n", " Determine the timestamp endpoints for each chunked query, \r\n", " such that number of rows returned by each query is (approximately) `max_rows_rows_per_query`\r\n", " \"\"\"\r\n", " find_batch_endpoints_query = f\"\"\"\r\n", " {QUERY}\r\n", " | sort by {time_col} desc\r\n", " | extend batch_num = row_cumsum(1) / {max_rows_per_query}\r\n", " | summarize endpoint=min({time_col}) by batch_num\r\n", " | sort by batch_num asc\r\n", " | project endpoint\r\n", " \"\"\"\r\n", " \r\n", " start_time = end_time - timedelta(days=days_back)\r\n", " response = await logs_client.query_workspace(\r\n", " workspace_id=LOG_ANALYTICS_WORKSPACE_ID,\r\n", " query=find_batch_endpoints_query,\r\n", " timespan=(start_time, end_time)\r\n", " )\r\n", "\r\n", " batch_endpoints = [end_time]\r\n", " batch_endpoints += [row[0] for row in response.tables[0].rows]\r\n", " return batch_endpoints\r\n", "\r\n", "\r\n", "async def auto_find_batch_endpoints(\r\n", " end_time: datetime, \r\n", " days_back: int, \r\n", " time_col: str = \"TimeGenerated\",\r\n", "):\r\n", " \"\"\"\r\n", " Determine the timestamp endpoints for each chunked query, \r\n", " such that number of rows returned by each query is less that the API limit (500K)\r\n", " and the size of the data returned is less than the API limit (~100 MiB).\r\n", "\r\n", " Aims to achieve the above without creating an excessive number of chunks - \r\n", " worst case performance is double the theoretical minimum number of queries necessary\r\n", " \"\"\"\r\n", " max_bytes_per_query = 100 * 1024 * 1024 # 100 MiB\r\n", " max_bytes_per_query = int(0.8 * max_bytes_per_query) # Limit is not exact (depends on data compression ratio), so leaving so wiggle-room here\r\n", "\r\n", " max_rows_per_query = int(5e5) # 500K\r\n", " max_rows_per_query = int(0.9 * max_rows_per_query) # Sometimes we will go over the limit if many events share the same timestamp\r\n", "\r\n", " find_batch_endpoints_by_data_limit_query = f\"\"\"\r\n", " {QUERY}\r\n", " | sort by {time_col} desc\r\n", " | extend batch_num = row_cumsum(estimate_data_size(*)) / {max_bytes_per_query}\r\n", " | summarize endpoint=min({time_col}) by batch_num\r\n", " | sort by batch_num asc\r\n", " | project endpoint\r\n", " \"\"\"\r\n", " \r\n", " start_time = end_time - timedelta(days=days_back)\r\n", " response = await logs_client.query_workspace(\r\n", " workspace_id=LOG_ANALYTICS_WORKSPACE_ID,\r\n", " query=find_batch_endpoints_by_data_limit_query,\r\n", " timespan=(start_time, end_time)\r\n", " )\r\n", "\r\n", " batch_endpoints_by_data_limit = [end_time]\r\n", " batch_endpoints_by_data_limit += [row[0] for row in response.tables[0].rows]\r\n", "\r\n", " batch_endpoints_by_row_limit = await batch_endpoints_by_row_count(end_time, days_back, max_rows_per_query)\r\n", "\r\n", " batch_endpoints = sorted(batch_endpoints_by_data_limit + batch_endpoints_by_row_limit, reverse=True)\r\n", " return batch_endpoints\r\n", "\r\n", "\r\n", "async def fetch_logs(\r\n", " end_time: datetime, \r\n", " days_back: int, \r\n", " *,\r\n", " auto_batch: bool = False, \r\n", " batch_size_rows: int = None, \r\n", " batch_size_days:float = None,\r\n", "):\r\n", " \"\"\"\r\n", " Batch the query into time intervals of size 
`batch_size_days` and perform asynchronous Log Analytics API calls for each batch.\r\n", "\r\n", " Returns a pandas dataframe containing the queried data.\r\n", " \"\"\"\r\n", " query = QUERY\r\n", "\r\n", " if auto_batch:\r\n", " print(\"Dynamically determining how to chunk up queries based on API row and data limits...\")\r\n", " batch_endpoints = await auto_find_batch_endpoints(end_time, days_back)\r\n", " num_queries = len(batch_endpoints) - 1\r\n", " elif batch_size_rows is not None:\r\n", " print(\"Dynamically determining how to chunk up queries...\")\r\n", " batch_endpoints = await batch_endpoints_by_row_count(end_time, days_back, batch_size_rows)\r\n", " num_queries = len(batch_endpoints) - 1\r\n", " elif batch_size_days is not None:\r\n", " num_queries = ceil(days_back / batch_size_days)\r\n", " batch_endpoints = [end_time - (i * timedelta(days=batch_size_days)) for i in range(num_queries + 1)]\r\n", " else:\r\n", " print(\"One of the parameters `auto_batch`, `batch_size_rows` or `batch_size_days` must be set\")\r\n", " return\r\n", "\r\n", " async_queries = []\r\n", " for i in range(num_queries):\r\n", " query_end_time = batch_endpoints[i]\r\n", " query_start_time = batch_endpoints[i + 1]\r\n", " print_message = f\"Submitting Query {i + 1} of {num_queries}: {query_start_time} to {query_end_time}\"\r\n", " async_queries.append(async_execute_query(query, query_start_time, query_end_time, print_message=print_message, query_num=i+1))\r\n", " \r\n", " # Use this instead of the above query for debugging issues with API limits - this will return the row count and data size of each chunk (instead of the data itself)\r\n", " # async_queries.append(async_execute_query(f\"{query} | summarize query_num = max({i}), count(), est_data_size = sum(estimate_data_size(*))\", query_start_time, query_end_time))\r\n", "\r\n", " results = await asyncio.gather(*async_queries)\r\n", " columns = results[0].columns\r\n", " rows = itertools.chain.from_iterable([table.rows for table in results if table is not None])\r\n", " df = pd.DataFrame(columns=columns, data=rows)\r\n", " return df\r\n", "\r\n", "async def estimate_data_size(\r\n", " end_time: datetime, \r\n", " days_back: int,\r\n", "):\r\n", " query = f\"{QUERY} | summarize n_rows = count(), estimate_data_size = sum(estimate_data_size(*))\"\r\n", " start_time = end_time - timedelta(days=days_back)\r\n", " response = await logs_client.query_workspace(\r\n", " workspace_id=LOG_ANALYTICS_WORKSPACE_ID,\r\n", " query=query,\r\n", " timespan=(start_time, end_time)\r\n", " )\r\n", "\r\n", " columns = response.tables[0].columns\r\n", " rows = response.tables[0].rows\r\n", " df = pd.DataFrame(columns=columns, data=rows)\r\n", " return df" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652469287172 } } }, { "cell_type": "markdown", "source": [ "Here, we define the functions that will be used to interact with the ADLS storage account(s) to which we want to export our historical log data. 
These use the `azure-storage-file-datalake` Python package which uses the [ADLS Gen2 REST API](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-directory-file-acl-python) under the hood.\r\n", "\r\n", "**Nothing to change here (just boilerplate) - you can just run this cell as-is!**" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "# This pattern avoids re-authenticating every time the ADLS client needs to be used\r\n", "global adls_service_client\r\n", "adls_service_client = None\r\n", "\r\n", "def get_adls_client(storage_account_name, storage_account_key, force_new_client=False):\r\n", " \"\"\"\r\n", " Authenticates to the ADLS API and instantiates an ADLS client object.\r\n", " Subsequent calls to this function return the same client object unless `force_new_client` is set.\r\n", " \"\"\"\r\n", " global adls_service_client\r\n", "\r\n", " # Return the cached client if one has already been created\r\n", " if (not force_new_client) and (adls_service_client is not None):\r\n", " return adls_service_client\r\n", " \r\n", " # Authenticate and instantiate a new ADLS client the first time this function is called\r\n", " try:\r\n", " adls_service_client = DataLakeServiceClient(\r\n", " account_url='{}://{}.dfs.core.windows.net'.format(\r\n", " 'https', storage_account_name\r\n", " ),\r\n", " credential=storage_account_key,\r\n", " )\r\n", " return adls_service_client\r\n", " except Exception as e:\r\n", " print(e)\r\n", "\r\n", "\r\n", "def upload_df_to_adls_path(\r\n", " df: pd.DataFrame, \r\n", " adls_dirname: str,\r\n", " adls_filename: str,\r\n", " container_name: str,\r\n", " storage_account_name: str, \r\n", " storage_account_key: str,\r\n", "):\r\n", " \"\"\"\r\n", " Upload a pandas dataframe to the specified ADLS path as a single JSON Lines file\r\n", " \"\"\"\r\n", " json_data = df.to_json(orient='records', lines=True, date_format='iso')\r\n", "\r\n", " adls_service_client = get_adls_client(storage_account_name, storage_account_key)\r\n", " file_system_client = adls_service_client.get_file_system_client(file_system=container_name)\r\n", "\r\n", " try:\r\n", " file_system_client.create_directory(adls_dirname)\r\n", " except Exception as e:\r\n", " print(e)\r\n", "\r\n", " try:\r\n", " directory_client = file_system_client.get_directory_client(adls_dirname)\r\n", " file_client = directory_client.get_file_client(adls_filename)\r\n", " file_client.upload_data(json_data, overwrite=True)\r\n", " except Exception as e:\r\n", " print(e)\r\n", "\r\n", "\r\n", "def delete_directory(\r\n", " adls_dirname: str,\r\n", " container_name: str,\r\n", " storage_account_name: str, \r\n", " storage_account_key: str,\r\n", "):\r\n", " \"\"\"\r\n", " Delete the specified ADLS directory\r\n", " \"\"\"\r\n", " try:\r\n", " adls_service_client = get_adls_client(storage_account_name, storage_account_key)\r\n", " file_system_client = adls_service_client.get_file_system_client(file_system=container_name)\r\n", " directory_client = file_system_client.get_directory_client(adls_dirname)\r\n", " directory_client.delete_directory()\r\n", " except Exception as e:\r\n", " print(e)\r\n", " \r\n", "\r\n", "def get_month_partition_dirs(\r\n", " adls_root_search_path: str, \r\n", " container_name: str,\r\n", " storage_account_name: str, \r\n", " storage_account_key: str,\r\n", "):\r\n", " \"\"\"\r\n", " Searches for all directories under the `adls_root_search_path` that have the prefix 'month='.\r\n", "\r\n", " Returns a list of matching directory paths.\r\n", " \"\"\"\r\n", " month_directories = []\r\n", " try:\r\n", " adls_service_client = get_adls_client(storage_account_name, 
storage_account_key)\r\n", " file_system_client = adls_service_client.get_file_system_client(file_system=container_name)\r\n", " paths_response = file_system_client.get_paths(path=adls_root_search_path)\r\n", "\r\n", " except Exception as e:\r\n", " print(e)\r\n", " return\r\n", "\r\n", " for path in paths_response:\r\n", " if path.is_directory and path.name.split('/')[-1].startswith('month='):\r\n", " month_directories.append(path.name)\r\n", "\r\n", " return month_directories\r\n", "\r\n", "\r\n", "def rename_month_partition_dirs(\r\n", " adls_root_search_path: str, \r\n", " container_name: str,\r\n", " storage_account_name: str, \r\n", " storage_account_key: str,\r\n", "):\r\n", " \"\"\"\r\n", " Searches for all directories under the `adls_root_search_path` that have the prefix 'month=' and replaces the prefix with 'm='.\r\n", " (This is used to modify the partition scheme produced by PySpark to match the naming scheme used by the Sentinel continuous data export tool.)\r\n", " \"\"\"\r\n", " month_directories = get_month_partition_dirs(adls_root_search_path, container_name, storage_account_name, storage_account_key)\r\n", "\r\n", " for month_dir in month_directories:\r\n", " path = PurePosixPath(month_dir)\r\n", " if not path.stem.startswith('month='):\r\n", " continue\r\n", " \r\n", " new_stem = f'm={path.stem[len(\"month=\"):]}'\r\n", " new_path = str(PurePosixPath(path.parent, new_stem))\r\n", " \r\n", " print(f'Renaming {path} to {new_path}')\r\n", " try:\r\n", " adls_service_client = get_adls_client(storage_account_name, storage_account_key)\r\n", " file_system_client = adls_service_client.get_file_system_client(file_system=container_name)\r\n", " directory_client = file_system_client.get_directory_client(month_dir)\r\n", " directory_client.rename_directory(new_name=directory_client.file_system_name + '/' + new_path)\r\n", " except Exception as e:\r\n", " print(e)" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652469092391 } } }, { "cell_type": "markdown", "source": [ "# 1. Fetching Log Data\r\n", "\r\n", "You may wish to estimate costs that will be incurred on your Azure storage account before beginning the data export process. The query in the cell below will estimate the size of the data to be exported along with the number of blobs that would be written \r\n", "if you choose to partition data into 5-minute intervals (as is done by the continuous log export). \r\n", "Use this in conjunction with the Azure storage [pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage) to determine costs that will be incurred for your storage setup. \r\n", "(Full billing details for ADLS Gen2 can be found [here](https://azure.microsoft.com/pricing/details/storage/data-lake/).)\r\n", "\r\n", "Use the query in the next cell to inform how much data to export." 
], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "# Set values as appropriate\r\n", "end_time = datetime.strptime(\"2022-04-01 00:00:00 +0000\", \"%Y-%m-%d %H:%M:%S %z\")\r\n", "days_back = 30 # How far back before the `end_date` to query logs\r\n", "\r\n", "data_size_df = await estimate_data_size(end_time, days_back) # type: ignore\r\n", "data_size_df[\"estimate_data_size_MB\"] = data_size_df[\"estimate_data_size\"] / (1000 **2)\r\n", "data_size_df[\"estimate_data_size_GB\"] = data_size_df[\"estimate_data_size_MB\"] / 1000\r\n", "\r\n", "# Section 3 of this notebook partitions data into 5 minute buckets by default (to match the continuous data export)\r\n", "data_size_df[\"n_blobs_if_using_5_min_partitions\"] = ceil(timedelta(days=days_back) / timedelta(minutes=5))\r\n", "\r\n", "data_size_df" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652469819697 } } }, { "cell_type": "markdown", "source": [ "This notebook fetches historical log data (via asynchronous, chunked queries to the log analytics [REST API](https://docs.microsoft.com/python/api/overview/azure/monitor-query-readme?view=azure-python)). \r\n", "\r\n", "Variables to set:\r\n", "- `end_time` - The datetime up to which logs should be fetched. If you have already set up a continuous export pipeline, **ensure that this is set to an earlier date than the start of the continuously exported data**.\r\n", "- `days_back` - How far back before the `end_date` to query logs \r\n", "- `batch_size_rows` OR `batch_size_days` - How many rows/fractional days of data to query in each API call\r\n", "\r\n", "Log Analytics workspace and log queries in Azure Monitor are multitenancy services that include limits to protect and isolate customers and to maintain quality of service. When querying for a large amount of data, you should consider the following limits\r\n", "\r\n", "| Limit _(as of May 2022)_ | Remedy |\r\n", "|------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\r\n", "| Maximum of 500,000 rows returned per query | Set `batch_size_rows` to <490K (or reduce value of `batch_size_days`) |\r\n", "| Maximum of ~100 MiB of data returned per query | - Reduce the set of columns being exported. This can have a significant performance impact!<br>- If number of columns can't be reduced, reduce `batch_size_rows`/`batch_size_days` |\r\n", "| Maximum of 200 requests per 30 seconds | - Increase `batch_size_rows`/`batch_size_days`<br>- Reduce `days_back`<br>- Try again! (Some 'Server Disconnected' errors are ephemeral and not due to the rate limit being hit) |\r\n", "\r\n", "**EXPERIMENTAL:** Setting `auto_batch = True` will cause the query function to intelligently chunk queries based on rows returned and estimated data size, so as to avoid API limits.\r\n", "\r\n", "Also consider that the fetched data will (initially) be held in memory on your Azure ML compute instance; depending on the volume of historical logs you wish to export, you may reach VM memory limits.<br>\r\n", "If this is a problem, consider exporting the data in chunks - e.g. 
instead of exporting 365 days of data at once, export 100 days of data at a time by setting the values of `end_time` and `days_back` appropriately and re-running the notebook from this cell onwards for each separate chunk.<br>\r\n", "Alternatively, use a compute instance with more memory to run this notebook (such as the [Azure E-Series VMs](https://azure.microsoft.com/pricing/details/virtual-machines/series/)).\r\n", "\r\n", "\r\n", "> **Note:** This cell may take a while to run!" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "# Set values as appropriate\r\n", "end_time = datetime.strptime(\"2022-04-01 00:00:00 +0000\", \"%Y-%m-%d %H:%M:%S %z\")\r\n", "days_back = 30 # How far back before the `end_time` to query logs\r\n", "\r\n", "# Use one of the strategies below to chunk queries OR use `auto_batch = True`\r\n", "batch_size_rows = int(4e5) # How many rows of data to return in each API call, max. 500K (see notes above)\r\n", "batch_size_days = 3 # How many (fractional) days of data to query in each API call (see notes above)\r\n", "\r\n", "df = await fetch_logs(\r\n", " end_time=end_time, \r\n", " days_back=days_back, \r\n", " auto_batch=True,\r\n", " # batch_size_rows=batch_size_rows,\r\n", " # batch_size_days=batch_size_days,\r\n", ") # type: ignore\r\n", "\r\n", "print(\"# Rows retrieved:\", len(df))\r\n", "with pd.option_context(\"display.max_columns\", None):\r\n", " display(df.head())" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652108565376 } } }, { "cell_type": "markdown", "source": [ "> **Before continuing**, check the output from the cell above:\r\n", "> - Ensure that the dataframe in the output of the cell above contains the expected data\r\n", "> - Ensure that there are no query failure or data truncation error messages in the output of the cell above" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "# 2. Write Data to ADLS\r\n", "\r\n", "In the cell below, specify the details of the storage account and file to which to write the log data dataframe. This will write a single JSON file containing the historical log data. \r\n", "This is only used as a temporary staging location before the data is repartitioned across a large number of smaller files using Spark.\r\n", "\r\n", "> **Note:** Your Azure Synapse workspace will need to be able to access this ADLS location in the next step.\r\n" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "staging_account_name = '<storage account name>' # fill in your primary account name\r\n", "staging_container_name = '<container name>' # fill in your container name\r\n", "staging_dirname = \"historical-log-dump\" #'<name of directory to write to>' # fill in your directory name\r\n", "staging_filename = \"signin_logs.json\" #'<name of file to write to>' # fill in your file name (include the .json extension)\r\n", "\r\n", "# In production, make sure any keys are stored and retrieved securely (e.g. using Azure Key Vault) - don't store keys as plaintext!\r\n",
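"# For example (illustrative sketch only - assumes this AML workspace has a default Key Vault and that a\r\n", "# secret named 'staging-account-key' has already been stored in it):\r\n", "# staging_account_key = Workspace.from_config().get_default_keyvault().get_secret('staging-account-key')\r\n",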
"staging_account_key = '<storage-account-key>' # replace with your storage account key\r\n", "\r\n", "upload_df_to_adls_path(\r\n", " df,\r\n", " adls_dirname=staging_dirname,\r\n", " adls_filename=staging_filename,\r\n", " container_name=staging_container_name,\r\n", " storage_account_name=staging_account_name,\r\n", " storage_account_key=staging_account_key,\r\n", ")" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652112397740 } } }, { "cell_type": "markdown", "source": [ "# 3. Repartition Data Using Spark\r\n", "\r\n", "Now we use PySpark to repartition the exported log data by timestamp in the following format:\r\n", "\r\n", " {base_path}/y=<year>/m=<month>/d=<day>/h=<hour>/m=<5-minute-interval>\r\n", "\r\n", "We also write to the same location used by the continuous log export; the `base_path` it uses is:\r\n", "\r\n", " WorkspaceResourceId=/subscriptions/{subscription_id}/resourcegroups/{resource_group.lower()}/providers/microsoft.operationalinsights/workspaces/{workspace_name.lower()}\r\n", "\r\n", "Storing the exported log data like this has two benefits:\r\n", "\r\n", "- This matches the partition scheme used by continuously exported logs; this means that continuously exported data and historical log data can be read in a unified way by any notebooks or data pipelines that consume this data\r\n", "- Partitioning by timestamps can allow for efficient querying of file data - by encoding the timestamp values in file paths, we can minimise the number of required file reads when loading data from a specific time range in downstream tasks \r\n" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "### Configure Azure ML and Azure Synapse Analytics\r\n", "\r\n", "To run this Synapse PySpark code, the Synapse integration for AML notebooks needs to be set up: please refer to the template notebook, [Configurate Azure ML and Azure Synapse Analytics](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Configurate%20Azure%20ML%20and%20Azure%20Synapse%20Analytics.ipynb), to configure the environment. \r\n", "\r\n", "That notebook configures an existing Azure Synapse workspace, creating and connecting to a Spark pool. You can then create a linked service to connect your AML workspace to the Azure Synapse workspace.\r\n", "\r\n", "> **Note**: Specify the input parameters in the step below in order to connect the AML workspace to the Synapse workspace using the linked service." ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "aml_workspace = '<aml workspace name>' # fill in your AML workspace name\r\n", "subscription_id = '<subscription id>' # fill in your subscription id\r\n", "resource_group = '<resource group of AML workspace>' # fill in the resource group of your AML workspace\r\n", "linkedservice = '<linked service name>' # fill in your linked service created to connect to synapse workspace" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652103162407 } } }, { "cell_type": "markdown", "source": [ "### Start Spark Session\r\n", "Enter your Synapse Spark compute below. 
To find the Spark compute, please follow these steps:\r\n", "\r\n", "1. On the AML Studio left menu, navigate to **Linked Services**\r\n", "2. Click on the name of the Linked Service you want to use\r\n", "3. Select the **Spark pools** tab\r\n", "4. Get the name of the Spark pool you want to use" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "from azureml.core import Workspace, LinkedService\r\n", "\r\n", "\r\n", "# Get the AML workspace\r\n", "aml_workspace = Workspace.get(name=aml_workspace, subscription_id=subscription_id, resource_group=resource_group)\r\n", "\r\n", "# Retrieve a known linked service\r\n", "linked_service = LinkedService.get(aml_workspace, linkedservice)\r\n", "\r\n", "# Enter the name of the attached Spark pool\r\n", "synapse_spark_compute = input('Synapse Spark compute:')\r\n", "\r\n", "# Start Spark session\r\n", "%synapse start -s $subscription_id -w $aml_workspace -r $resource_group -c $synapse_spark_compute" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1652103184815 } } }, { "cell_type": "markdown", "source": [ "### Repartition Data Using PySpark\r\n", "\r\n", "Having started the Spark session, we can run PySpark code by starting a cell with the `%%synapse` cell magic. \r\n", "Doing this part of the data ETL process using Spark allows the partitioning and writing of data to be hugely parallelized - for a year's worth of log data, we may be creating over 100,000 data files (one per partition)." ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "First, we read the historical log data from ADLS (where we exported the data in step 2) into the Spark context.\r\n", "\r\n", "> Fill in the details of the storage account, ADLS container and directory/file to which the historical logs were exported in step 2." ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "%%synapse\r\n", "\r\n", "# Fill in the details of the ADLS location where we have dumped the historical log data\r\n", "staging_account_name = '<storage account name>' # fill in your primary account name\r\n", "staging_container_name = '<container name>' # fill in your container name\r\n", "staging_dirname = \"historical-log-dump\" # Name of directory to which logs were written in step 2.\r\n", "staging_filename = \"signin_logs.json\" # Name of file to which logs were written in step 2.\r\n", "\r\n", "historical_logs_adls_path = (\r\n", " f\"abfss://{staging_container_name}@{staging_account_name}.dfs.core.windows.net/\"\r\n", " f\"{staging_dirname}/{staging_filename}\"\r\n", ")\r\n", "\r\n", "df = spark.read.json(historical_logs_adls_path)" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "Next, we use the `TimeGenerated` column to create the year, month, day, etc. columns we will use as partition keys." 
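, "\r\n", "\r\n", "For example (illustrative values), an event with `TimeGenerated` of `2022-03-15T13:47:22Z` gets partition values `y=2022`, `month=3`, `d=15`, `h=13` and `m=45`: the 5-minute bucket is computed as `minute - (minute % 5)`, so minutes 45-49 all fall into `m=45`. (The `month` column is renamed to `m` in step 4.)"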
], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "%%synapse\r\n", "\r\n", "from pyspark.sql.functions import col, lit\r\n", "import pyspark.sql.functions as F\r\n", "\r\n", "time_col = col('TimeGenerated')\r\n", "year_col = F.year(time_col).alias('y')\r\n", "month_col = F.month(time_col).alias('month')\r\n", "day_col = F.dayofmonth(time_col).alias('d')\r\n", "hour_col = F.hour(time_col).alias('h')\r\n", "minute_col = F.minute(time_col)\r\n", "five_min_bucket_col = (minute_col - (minute_col % 5)).alias('m')\r\n", "\r\n", "df = df.select('*', year_col, month_col, day_col, hour_col, five_min_bucket_col)\r\n", "\r\n", "partition_col_names = ['y', 'month', 'd', 'h', 'm']\r\n", "df.select(partition_col_names).show(5)" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "# Optional - how many files will be written? (This could be over 100,000 files for a year's worth of data!)\r\n", "\r\n", "# %%synapse\r\n", "# df.select(partition_col_names).distinct().count()" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "Here we repartition the data on the partition keys created above, then write out one file per partition to the same location as the continuous log export. This will mean that continuously exported data and historical log data can be read from in a unified way and can also increase read performance for future use of the exported data.\r\n", "\r\n", "> Fill in the details of the storage account, ADLS container to which to write the repartitioned data (usually you will want this to be the same container to which the continuous export tool is writing logs). \r\n", "> Also filling in the Sentinel workspace subscription, resource group and workspace name ensures that logs are written to the same path as continuously exported logs." 
], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "%%synapse\r\n", "\r\n", "# Fill in your ADLS account and container and your Sentinel ws subscription, rg and ws name\r\n", "account_name = '<storage account name>' # fill in your primary account name\r\n", "container_name = '<container name>' # fill in your container name\r\n", "subscription_id = '<subscription id>' # fill in the subscription id of your Sentinel workspace\r\n", "resource_group = '<resource group>' # fill in your resource groups for your Sentinel workspace\r\n", "workspace_name = '<Microsoft sentinel/log analytics workspace name>' # fill in your workspace name\r\n", "\r\n", "continuous_export_base_path = (\r\n", " f\"abfss://{container_name}@{account_name}.dfs.core.windows.net/WorkspaceResourceId=/\"\r\n", " f\"subscriptions/{subscription_id}/\"\r\n", " f\"resourcegroups/{resource_group.lower()}/\"\r\n", " f\"providers/microsoft.operationalinsights/\"\r\n", " f\"workspaces/{workspace_name.lower()}\"\r\n", ")\r\n", "\r\n", "# The extra use of `repartition` here ensures that we only write one file per partition\r\n", "df.repartition(*partition_col_names).write.partitionBy(partition_col_names).json(continuous_export_base_path, mode='append')" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "### Stop Spark Session" ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "%synapse stop" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "markdown", "source": [ "# 4. Rename Some Directories\r\n", "\r\n", "There is one remaining difference between the partition scheme used by the continuous export and the historical data we have exported - namely, the \"month\" column needs to be renamed to \"m\". We can make this change by using the ADLS Python SDK. \r\n", "That is, we need to change the directory structure from `{base_path}/y=<year>/`**`month=<month>`**`/d=<day>/h=<hour>/m=<5-minute-interval>` to `{base_path}/y=<year>/`**`m=<month>`**`/d=<day>/h=<hour>/m=<5-minute-interval>`\r\n", "\r\n", "> **Note:** This type of change can be done efficiently due to the nature of the ADLS gen2 hierarchical filesystem (see [details](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-namespace)). \r\n", "> Doing a similar task on a standard blob storage container would be much slower since the pseudo-filesystem means that each of the blobs within the high-level 'directory' would need to be individually renamed under the hood." ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "# Fill in the details of the storage account and ADLS container to which the repartitioned log data was written\r\n", "account_name = '<storage account name>' # fill in your account name\r\n", "container_name = '<container name>' # fill in your container name\r\n", "\r\n", "# In production, make sure any keys are stored and retrieved securely (e.g. 
using Azure Key Vault) - don't store keys as plaintext!\r\n", "account_key = '<storage-account-key>' # replace your storage account key\r\n", "\r\n", "# Fill in the details of your **Sentinel workspace**\r\n", "subscription_id = '<subscription id>' # fill in the subscription id of your Sentinel workspace\r\n", "resource_group = '<resource group>' # fill in your resource groups for your Sentinel workspace\r\n", "workspace_name = '<Microsoft sentinel/log analytics workspace name>' # fill in your workspace name\r\n", "\r\n", "continuous_export_base_path = (\r\n", " f\"abfss://{container_name}@{account_name}.dfs.core.windows.net/WorkspaceResourceId=/\"\r\n", " f\"subscriptions/{subscription_id}/\"\r\n", " f\"resourcegroups/{resource_group.lower()}/\"\r\n", " f\"providers/microsoft.operationalinsights/\"\r\n", " f\"workspaces/{workspace_name.lower()}\"\r\n", ")\r\n", "\r\n", "rename_month_partition_dirs(\r\n", " adls_root_search_path=continuous_export_base_path,\r\n", " container_name=container_name,\r\n", " storage_account_name=account_name,\r\n", " storage_account_key=account_key,\r\n", ")" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": false }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1651764683879 } } }, { "cell_type": "markdown", "source": [ "# 5. Cleanup\r\n", "\r\n", "Our exported historical log data is now ready. We can now remove any data from the intermediate staging location.\r\n", "\r\n", "> **Note:** You may want to navigate to your Azure storage account and check that the data has been written correctly before deleting any data." ], "metadata": { "nteract": { "transient": { "deleting": false } } } }, { "cell_type": "code", "source": [ "delete_directory(\r\n", " adls_dirname=staging_dirname,\r\n", " container_name=staging_container_name,\r\n", " storage_account_name=staging_account_name,\r\n", " storage_account_key=staging_account_key\r\n", ")" ], "outputs": [], "execution_count": null, "metadata": { "jupyter": { "source_hidden": false, "outputs_hidden": true }, "nteract": { "transient": { "deleting": false } }, "gather": { "logged": 1651830386210 } } } ], "metadata": { "kernelspec": { "name": "python38-azureml", "language": "python", "display_name": "Python 3.8 - AzureML" }, "language_info": { "name": "python", "version": "3.8.5", "mimetype": "text/x-python", "codemirror_mode": { "name": "ipython", "version": 3 }, "pygments_lexer": "ipython3", "nbconvert_exporter": "python", "file_extension": ".py" }, "kernel_info": { "name": "python38-azureml" }, "microsoft": { "host": { "AzureML": { "notebookHasBeenCompleted": true } } }, "nteract": { "version": "nteract-front-end@1.0.0" } }, "nbformat": 4, "nbformat_minor": 2 }